23 research outputs found

    Neural Networks for Document Image and Text Processing

    Full text link
    Nowadays, the main libraries and document archives are investing considerable effort in digitizing their collections. Indeed, most of them are scanning the documents and publishing the resulting images without their corresponding transcriptions, which seriously limits the possibilities for exploiting these documents. When a transcription is necessary, it is manually produced by human experts, which is a very expensive and error-prone task. Obtaining transcriptions of the required quality demands the intervention of human experts to review and correct the output of the recognition engines. To this end, it is extremely useful to provide interactive tools for obtaining and editing the transcription. Although text recognition is the final goal, several previous steps (known as preprocessing) are necessary in order to get a good transcription from a digitized image. Document cleaning, enhancement, and binarization (if needed) are the first stages of the recognition pipeline. Historical handwritten documents, in addition, show several degradations, stains, ink bleed-through and other artifacts; therefore, more sophisticated and elaborate methods are required when dealing with this kind of document, and in some cases even expert supervision is needed. Once the images have been cleaned, the main zones of the image have to be detected: those that contain text, and other parts such as images, decorations and versal letters. Moreover, the relations among them and with the final text have to be detected. These preprocessing steps are critical for the final performance of the system, since an error at this point will be propagated through the rest of the transcription process. The ultimate goal of the Document Image Analysis pipeline is to obtain the transcription of the text (Optical Character Recognition and Handwritten Text Recognition). In this thesis we aimed to improve the main stages of the recognition pipeline, from the scanned documents as input to the final transcription. We focused our effort on applying Neural Networks and deep learning techniques directly to the document images to extract suitable features for the different tasks addressed in this work: Image Cleaning and Enhancement (Document Image Binarization), Layout Extraction, Text Line Extraction, Text Line Normalization and, finally, decoding (text line recognition). As can be seen, this work focuses on incremental improvements across the several Document Image Analysis stages, but it also deals with some real challenges: historical manuscripts, documents without clear layouts, and very degraded documents. Neural Networks are a central topic for the whole work collected in this document. Different convolutional models have been applied for document image cleaning and enhancement. Connectionist models have also been used for text line extraction: first, for detecting interest points, combining them into text segments and, finally, extracting the lines by means of aggregation techniques; and second, for pixel labeling to extract the main body area of the text and then the limits of the lines. For text line preprocessing, i.e., to normalize the text lines before recognizing them, similar models have been used to detect the main body area and then to height-normalize the images, giving more importance to the central area of the text. Finally, Convolutional Neural Networks and deep multilayer perceptrons have been combined with hidden Markov models to improve our transcription engine significantly.
The suitability of all these approaches has been tested with different corpora for each of the stages addressed, giving competitive results for most of the methodologies presented. Pastor Pellicer, J. (2017). Neural Networks for Document Image and Text Processing [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90443
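    The cleaning and binarization stage described in this abstract treats binarization as per-pixel classification with convolutional models. Below is a minimal sketch of that idea, assuming PyTorch and an invented two-layer architecture; the thesis's actual networks are not reproduced here.

```python
# Sketch only: a fully convolutional pixel classifier for document image
# binarization. Architecture and sizes are illustrative assumptions, not the
# models used in the thesis.
import torch
import torch.nn as nn

class BinarizationCNN(nn.Module):
    """Grayscale page in, per-pixel ink-probability map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # local texture features
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),             # per-pixel logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # probability that each pixel is ink

page = torch.rand(1, 1, 256, 256)          # stand-in for a scanned page
ink_prob = BinarizationCNN()(page)
binary_image = ink_prob > 0.5              # threshold to obtain the binarized page
```

    Because the network is fully convolutional, the same learned filter can be applied to pages of any size, which makes this formulation attractive for whole-document cleaning.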

    Modelado de lenguaje basado en categorías para Búsqueda de Respuesta dirigida por la Voz

    Full text link
    In recent years, interest has grown in performing Question Answering tasks using speech instead of written sentences. This work presents an approach for a Voice-Activated Question Answering system, focusing on building language models capable of including as many task-relevant words as possible (extracted from the document repository in which the answer will be searched). These language models must also represent the general syntactic structure of the typical questions that can be asked of the system. The techniques presented have been applied to questions from the 2003-2006 editions of the CLEF QA Contest. Pastor Pellicer, J. (2011). Modelado de lenguaje basado en categorías para Búsqueda de Respuesta dirigida por la Voz. http://hdl.handle.net/10251/15872

    Language modelization and categorization for voice-activated QA

    Full text link
    The interest in incorporating voice interfaces into Question Answering systems has increased in recent years. In this work, we present an approach to the Automatic Speech Recognition component of a Voice-Activated Question Answering system, focusing our interest on building a language model able to include as many relevant words from the document repository as possible, but also representing the general syntactic structure of typical questions. We have applied this technique to the recognition of questions of the CLEF QA 2003-2006 contests. Work partially supported by the Spanish MICINN under contract TIN2008-06856-C05-02, and by the Vicerrectorat d’Investigació, Desenvolupament i Innovació of the Universitat Politècnica de València under contract 20100982. Pastor Pellicer, J.; Hurtado Oliver, LF.; Segarra Soriano, E.; Sanchís Arnal, E. (2011). Language modelization and categorization for voice-activated QA. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Springer Verlag (Germany). 7042:475-482. https://doi.org/10.1007/978-3-642-25085-9_56
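    The abstract above hinges on keeping the syntactic skeleton of typical questions while absorbing repository-specific vocabulary through word categories. The toy Python sketch below illustrates that mechanism with an invented CITY category and made-up word lists; it is not the paper's actual category set, corpus, or smoothing scheme.

```python
# Toy sketch of a category-based n-gram language model: questions are tagged
# with a category placeholder, a bigram model is trained on the tagged text,
# and the category mass is redistributed over repository words at query time.
from collections import defaultdict

questions = ["what is the capital of CITY", "where is CITY located"]
city_words = ["paris", "valencia", "kyoto"]   # assumed to come from the repository

bigrams = defaultdict(lambda: defaultdict(int))
for q in questions:
    tokens = ["<s>"] + q.split() + ["</s>"]
    for prev, cur in zip(tokens, tokens[1:]):
        bigrams[prev][cur] += 1

def next_word_probs(prev):
    """Bigram continuation probabilities, expanding CITY into its members."""
    counts = bigrams[prev]
    total = sum(counts.values())
    if total == 0:
        return {}
    out = {}
    for word, c in counts.items():
        p = c / total
        if word == "CITY":                    # spread the category probability
            for city in city_words:           # uniformly over its member words
                out[city] = p / len(city_words)
        else:
            out[word] = p
    return out

print(next_word_probs("of"))   # each city receives 1/3 of the CITY mass
```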

    The NoisyOffice Database: A Corpus To Train Supervised Machine Learning Filters For Image Processing

    Full text link
    [EN] This paper presents the 'NoisyOffice' database. It consists of images of printed text documents with noise mainly caused by uncleanliness from a generic office, such as coffee stains and footprints on documents or folded and wrinkled sheets with degraded printed text. This corpus is intended to train and evaluate supervised learning methods for cleaning, binarization and enhancement of noisy images of grayscale text documents. As an example, several experiments of image enhancement and binarization are presented using deep learning techniques. Double-resolution images are also provided for testing super-resolution methods. The corpus is freely available at the UCI Machine Learning Repository. Finally, a challenge organized by Kaggle Inc. to denoise images using the database is described in order to show its suitability for benchmarking image processing systems. This research was undertaken as part of the project TIN2017-85854-C4-2-R, jointly funded by the Spanish MINECO and FEDER funds. Castro-Bleda, MJ.; España Boquera, S.; Pastor Pellicer, J.; Zamora Martínez, FJ. (2020). The NoisyOffice Database: A Corpus To Train Supervised Machine Learning Filters For Image Processing. The Computer Journal. 63(11):1658-1667. https://doi.org/10.1093/comjnl/bxz098
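    A corpus of aligned noisy/clean page pairs such as NoisyOffice is typically consumed by cutting both images into corresponding patches and training a supervised filter to predict the clean intensity under each patch center. The NumPy sketch below shows that pairing step with synthetic stand-in images; the real corpus files and any particular learning model are assumptions outside this snippet.

```python
# Sketch: turn one noisy/clean image pair into (patch, target-pixel) training
# examples for a supervised cleaning filter. Images here are synthetic.
import numpy as np

def extract_pairs(noisy, clean, patch=9):
    """Slide a patch x patch window over the noisy image; the regression
    target is the clean intensity at the window's center."""
    h, w = noisy.shape
    r = patch // 2
    X, y = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            X.append(noisy[i - r:i + r + 1, j - r:j + r + 1].ravel())
            y.append(clean[i, j])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                                  # stand-in clean page
noisy = np.clip(clean + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
X, y = extract_pairs(noisy, clean)
print(X.shape, y.shape)   # (3136, 81) (3136,) -> ready for any regressor
```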

    Multimodal dialog system based on statistical models

    Get PDF
    In this paper, we present a multimodal dialog system. In addition to input and output multimodality, the main feature of the system is that its key modules are based on statistical models. Work partially funded by the Spanish government under project TIN2008-06856-C05-02 and by the Universitat Politècnica de València under project 20100982.

    Sistema de diálogo multimodal basado en modelos estadísticos

    Full text link
    [EN] In this paper, we present a multimodal dialog system. In addition to input and output multimodality, the main feature of the system is that its key modules are based on statistical models. Sanchís Arnal, E.; Hurtado Oliver, LF.; Gómez Adrian, JA.; García Granada, F.; Pastor Pellicer, J.; Planells Lerma, J.; Segarra Soriano, E. (2011). Sistema de diálogo multimodal basado en modelos estadísticos. PROCESAMIENTO DEL LENGUAJE NATURAL. (47):325-326. http://hdl.handle.net/10251/28872

    Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome: Insights from the LUNG SAFE study

    Get PDF
    Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study. Methods: In this secondary analysis of the LUNG SAFE study, we wished to determine the prevalence and the outcomes associated with hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled criteria of ARDS on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained (i.e., present on day 1 and day 2) hyperoxemia, or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia). Results: Of 2005 patients that met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) out of 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared to 39% in a propensity-matched sample of normoxemic (PaO2 55-100 mmHg) patients (P = 0.47). Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often non-sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort. Trial registration: LUNG-SAFE is registered with ClinicalTrials.gov, NCT02010073.

    F-Measure as the error function to train neural networks

    Full text link
    Imbalanced datasets impose serious problems in machine learning. For many tasks characterized by imbalanced data, the F-Measure seems more appropriate than the Mean Square Error or other error measures. This paper studies the use of the F-Measure as the training criterion for Neural Networks by integrating it into the Error-Backpropagation algorithm. This novel training criterion has been validated empirically on a real task for which the F-Measure is typically applied to evaluate quality. The task consists of cleaning and enhancing ancient document images, which is performed, in this work, by means of neural filters. This work has been partially supported by MICINN project HITITA (TIN2010-18958) and by the FPI-MICINN (BES-2011-046167) scholarship from Ministerio de Ciencia e Innovación, Gobierno de España. Pastor Pellicer, J.; Zamora Martínez, FJ.; España Boquera, S.; Castro-Bleda, MJ. (2013). F-Measure as the error function to train neural networks. In Advances in Computational Intelligence. Springer Verlag (Germany). 376-384. https://doi.org/10.1007/978-3-642-38679-4_37
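    The key move in this paper is making the F-Measure differentiable so it can drive Error-Backpropagation. One common formulation, sketched below in PyTorch as my own reading rather than the paper's exact derivation, counts true and false positives softly from the output probabilities and minimizes 1 - F1.

```python
# Sketch of a "soft" F-measure loss: TP/FP/FN are accumulated from continuous
# probabilities instead of hard decisions, so the loss stays differentiable.
import torch

def soft_f_measure_loss(probs, targets, eps=1e-7):
    """probs and targets in [0, 1]; returns 1 - soft F1."""
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    f1 = (2 * tp) / (2 * tp + fp + fn + eps)
    return 1 - f1

probs = torch.rand(16, 1, 32, 32, requires_grad=True)  # stand-in network output
targets = (torch.rand(16, 1, 32, 32) > 0.9).float()    # heavily imbalanced labels
loss = soft_f_measure_loss(probs, targets)
loss.backward()                                        # gradients flow as usual
print(loss.item())
```

    Because the counts are sums of probabilities, the loss is smooth in the network outputs, so standard gradient descent applies even when the positive class (here, ink pixels) is rare.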

    Immunocompromised patients with acute respiratory distress syndrome: Secondary analysis of the LUNG SAFE database

    Get PDF
    Background: The aim of this study was to describe data on epidemiology, ventilatory management, and outcome of acute respiratory distress syndrome (ARDS) in immunocompromised patients. Methods: We performed a post hoc analysis on the cohort of immunocompromised patients enrolled in the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) study. The LUNG SAFE study was an international, prospective study including hypoxemic patients in 459 ICUs from 50 countries across 5 continents. Results: Of 2813 patients with ARDS, 584 (20.8%) were immunocompromised, 38.9% of whom had an unspecified cause. Pneumonia, nonpulmonary sepsis, and noncardiogenic shock were their most common risk factors for ARDS. Hospital mortality was higher in immunocompromised than in immunocompetent patients (52.4% vs 36.2%; p < 0.0001), despite similar severity of ARDS. Decisions regarding limiting life-sustaining measures were significantly more frequent in immunocompromised patients (27.1% vs 18.6%; p < 0.0001). Use of noninvasive ventilation (NIV) as first-line treatment was higher in immunocompromised patients (20.9% vs 15.9%; p = 0.0048), and immunodeficiency remained independently associated with the use of NIV after adjustment for confounders. Forty-eight percent of the patients treated with NIV were intubated, and their mortality was not different from that of the patients invasively ventilated ab initio. Conclusions: Immunosuppression is frequent in patients with ARDS, and infections are the main risk factors for ARDS in these immunocompromised patients. Their management differs from that of immunocompetent patients, particularly the greater use of NIV as first-line ventilation strategy. Compared with immunocompetent subjects, they have higher mortality regardless of ARDS severity as well as a higher frequency of limitation of life-sustaining measures. Nonetheless, nearly half of these patients survive to hospital discharge. Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013

    Grado de implementación de las estrategias preventivas del síndrome post-UCI: estudio observacional multicéntrico en España

    No full text